Foundation models are redefining how AI systems are built. Practitioners now follow a standard procedure to build their machine learning solutions: download a copy of a foundation model, and fine-tune it on some in-house data for the target task of interest. Consequently, the Internet now hosts many fine-tunings of a handful of foundation models, each specialized to a different task. Yet these individual fine-tunings often lack strong generalization and exist in isolation, without benefiting from each other. In our opinion, this is a missed opportunity, as these specialized models contain diverse features. Based on this insight, we propose model recycling, a simple strategy that leverages multiple fine-tunings of the same foundation model on diverse auxiliary tasks and repurposes them as rich and diverse initializations for the target task. Specifically, model recycling fine-tunes each specialized model on the target task in parallel, and then averages the weights of all target fine-tunings into a final model. Empirically, we show that model recycling maximizes model diversity by benefiting from diverse auxiliary tasks, and achieves a new state of the art on DomainBed, the reference benchmark for out-of-distribution generalization. Looking forward, model recycling contributes to the emerging paradigm of updatable machine learning where, akin to open-source software development, the community collaborates to incrementally and reliably update machine learning models.
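The recipe is concrete enough to sketch. Below is a minimal, hypothetical illustration assuming PyTorch-style modules: `finetune` stands in for any standard fine-tuning routine, and the uniform averaging of state dicts mirrors the description above; none of these names come from the paper's released code.

```python
# Minimal sketch of model recycling with PyTorch-style state dicts.
# `finetune`, the loader, and uniform averaging are illustrative choices.
import copy
import torch

def recycle(auxiliary_models, target_loader, finetune):
    """Fine-tune each auxiliary fine-tuning on the target task,
    then average the resulting weights into a single model."""
    finetuned = []
    for model in auxiliary_models:
        m = copy.deepcopy(model)        # keep the auxiliary checkpoint intact
        finetune(m, target_loader)      # standard fine-tuning on the target task
        finetuned.append(m.state_dict())

    # Uniform weight averaging across all target fine-tunings.
    # (Integer buffers such as BatchNorm counters get cast back on load.)
    avg = {k: torch.stack([sd[k].float() for sd in finetuned]).mean(0)
           for k in finetuned[0]}
    final = copy.deepcopy(auxiliary_models[0])
    final.load_state_dict(avg)
    return final
```

Uniform averaging is the simplest choice; since every fine-tuning starts from the same foundation model, the weights plausibly stay close enough for their average to remain a sensible model.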
Does the dominant approach to learning representations (as a side effect of optimizing an expected cost for a single training distribution) remain a good approach when we are dealing with multiple distributions? Our thesis is that such scenarios are better served by representations that are "richer" than those obtained with a single optimization episode. This is supported by a collection of empirical results obtained with an apparently naïve ensembling technique: concatenating the representations obtained from multiple training episodes that use the same data, model, algorithm, and hyper-parameters, but different random seeds. These independently trained networks perform similarly. Yet, in a number of scenarios involving new distributions, the concatenated representation performs substantially better than an equivalently sized network trained from scratch. This proves that the representations constructed by different training episodes are in fact different. Although their concatenation carries little additional information about the training task under the training distribution, it becomes substantially more informative when tasks or distributions change. Meanwhile, a single training episode is unlikely to yield such a redundant representation, because the optimization process has no reason to accumulate features that do not incrementally improve the training performance.
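A minimal sketch of this ensembling technique, assuming PyTorch and a hypothetical `.features()` method exposing each network's penultimate representation; the names and the probe's training loop are illustrative, not the paper's code:

```python
# Sketch of the concatenation ensemble: K networks trained independently
# (same data/model/algorithm, different seeds), features concatenated,
# then only a linear head is fit on the new distribution.
import torch

def concatenated_features(trained_nets, x):
    """Concatenate the frozen representations of K independently trained nets."""
    with torch.no_grad():
        feats = [net.features(x) for net in trained_nets]  # assumed .features() hook
    return torch.cat(feats, dim=1)  # (batch, K * d) "rich" representation

def fit_linear_probe(trained_nets, loader, num_classes, epochs=10, lr=1e-3):
    """Train a linear probe on concatenated features; backbones stay frozen."""
    d = concatenated_features(trained_nets, next(iter(loader))[0]).shape[1]
    probe = torch.nn.Linear(d, num_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            loss = torch.nn.functional.cross_entropy(
                probe(concatenated_features(trained_nets, x)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe
```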
There is often a dilemma between the ease of optimization and robust out-of-distribution (OOD) generalization. For instance, many OOD methods rely on penalty terms that are challenging to optimize: they are either too strong to be optimized reliably, or too weak to achieve their goals. We propose to initialize the networks with a rich representation containing a palette of potentially useful features, ready to be used by even simple models. On the one hand, a rich representation provides a good initialization for the optimizer. On the other hand, it also provides an inductive bias that helps OOD generalization. Such a representation is constructed with the Rich Feature Construction (RFC) algorithm, also called the Bonsai algorithm, which consists of a succession of training episodes. During the discovery episodes, we craft a multi-objective optimization criterion and its associated datasets in a manner that prevents the network from using the features constructed in the previous iterations. During the synthesis episodes, we use knowledge distillation to force the network to simultaneously represent all the previously discovered features. Initializing networks with the Bonsai representation consistently helps six OOD methods achieve top performance on the ColoredMNIST benchmark. The same technique substantially outperforms comparable results on the Wilds Camelyon17 task, eliminates the high result variance that plagues other methods, and makes hyperparameter tuning and model selection more reliable.
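The episode structure lends itself to a schematic sketch. Everything below is a high-level paraphrase of the description above, with the discovery objective and the distillation routine left as caller-supplied callables because the paper specifies their exact form:

```python
# Schematic sketch of the RFC / Bonsai training loop described above.
# `make_network`, `discovery_step`, and `distill` are placeholders.
def rich_feature_construction(make_network, discovery_step, distill, rounds):
    discovered = []  # feature extractors from past discovery episodes
    for _ in range(rounds):
        # Discovery episode: train a fresh network under a multi-objective
        # criterion crafted so it cannot reuse the features in `discovered`.
        net = make_network()
        discovery_step(net, discovered)
        discovered.append(net)
    # Synthesis episode: distill all discovered features into one network,
    # which then serves as the rich initialization for downstream OOD methods.
    student = make_network()
    distill(student, discovered)
    return student
```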
The goal of this paper is not to introduce a single algorithm or method, but to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks. In order to substantiate our theoretical analysis, we perform targeted experiments to verify our assumptions, illustrate our claims, and quantify the phenomena. This paper is divided into three sections. The first section introduces the problem at hand. The second section is dedicated to rigorously studying and proving the problems, including instability and saturation, that arise when training generative adversarial networks. The third section examines a practical and theoretically grounded direction towards solving these problems, while introducing new tools to study them.
Representing physical signals at different scales is among the most challenging problems in engineering. Several multi-scale modeling tools have been developed to describe physical systems governed by \emph{partial differential equations} (PDEs). These tools sit at the crossroads of principled physical models and numerical schemes. Recently, data-driven models have been introduced to speed up the approximation of PDE solutions compared to numerical solvers. Among these recent data-driven methods, neural integral operators are a class that learns mappings between function spaces. These functions are discretized on graphs (meshes), which are well suited to modeling interactions in physical phenomena. In this work, we study three multi-resolution architectures with integral kernel operators approximated using \emph{message passing graph neural networks} (MPGNNs). To validate our study, we conduct extensive MPGNN experiments with carefully chosen metrics, considering both steady and unsteady PDEs.
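To make the integral-kernel idea concrete, here is a hedged single-layer sketch in PyTorch: a small network plays the kernel $k(x_i, x_j)$, and messages are averaged over mesh edges. The class and argument names are illustrative, and real neural operators stack several such layers across resolutions:

```python
# One integral-kernel message passing layer: node values are updated by
# averaging kernel messages over mesh edges (plus a residual connection).
import torch

class KernelMessagePassing(torch.nn.Module):
    def __init__(self, coord_dim, hidden):
        super().__init__()
        # Learned kernel k(x_i, x_j): maps a pair of node coordinates
        # to a (hidden x hidden) matrix applied to the sender's value.
        self.kernel = torch.nn.Sequential(
            torch.nn.Linear(2 * coord_dim, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, hidden * hidden))
        self.hidden = hidden

    def forward(self, coords, values, edges):
        # edges: (2, E) long tensor of (receiver, sender) mesh edges
        r, s = edges
        k = self.kernel(torch.cat([coords[r], coords[s]], dim=-1))
        msgs = torch.einsum('eij,ej->ei',
                            k.view(-1, self.hidden, self.hidden), values[s])
        out = torch.zeros_like(values).index_add_(0, r, msgs)
        deg = torch.zeros(len(values)).index_add_(0, r, torch.ones(len(r)))
        return values + out / deg.clamp(min=1).unsqueeze(-1)  # mean aggregation
```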
Sparse matrix factorization is the problem of approximating a matrix $\mathbf{Z}$ by a product of $J$ sparse factors $\mathbf{X}^{(J)} \mathbf{X}^{(J-1)} \ldots \mathbf{X}^{(1)}$. This paper focuses on the identifiability of this problem, in view of better understanding under which sparsity constraints the problem is well posed. We provide conditions under which the problem of factorizing a matrix into \emph{two} sparse factors admits a unique solution, up to unavoidable permutation and scaling equivalences. Our general framework considers an arbitrary family of prescribed sparsity patterns, allowing us to capture more structured notions of sparsity than simply a count of nonzero entries. These conditions are shown to be related to the essential uniqueness of the exact decomposition of a matrix into a sum of rank-one matrices with structured sparsity constraints. In particular, in the case of fixed-support sparse matrix factorization, we give a general sufficient condition for identifiability based on rank-one matrix completability, and we derive from it a completion algorithm that can verify whether this sufficient condition is satisfied and, if so, recover the entries of the two sparse factors. A companion paper further exploits these conditions to derive identifiability properties and theoretically sound factorization methods for multi-layer sparse matrix factorization with support constraints associated with some well-known fast transforms such as the Hadamard or the Discrete Fourier Transform.
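As an illustration of the fixed-support setting, the sketch below recovers two sparse factors in the simplest situation where the completability condition clearly holds: each rank-one contribution $x_r y_r^T$ occupies its own block of $\mathbf{Z}$. This is an assumed special case for exposition, not the paper's general completion algorithm:

```python
# Recover X, Y with Z = X @ Y = sum_r x_r y_r^T when the prescribed
# supports make the rank-one contributions cover disjoint blocks of Z.
import numpy as np

def recover_fixed_support(Z, row_supports, col_supports):
    """row_supports[r] = supp(x_r), col_supports[r] = supp(y_r),
    with pairwise disjoint blocks assumed."""
    n, m = Z.shape
    R = len(row_supports)
    X, Y = np.zeros((n, R)), np.zeros((R, m))
    for r, (rows, cols) in enumerate(zip(row_supports, col_supports)):
        block = Z[np.ix_(rows, cols)]
        u, s, vt = np.linalg.svd(block)
        if s[1:].max(initial=0) > 1e-10:
            raise ValueError(f"block {r} is not rank one: no exact factorization")
        X[rows, r] = u[:, 0] * s[0]   # scaling ambiguity fixed by convention
        Y[r, cols] = vt[0]
    return X, Y
```

The SVD-based recovery also makes the unavoidable ambiguity visible: each pair $(x_r, y_r)$ is only determined up to a scaling $(\lambda x_r, y_r / \lambda)$, fixed here by putting the singular value into $x_r$.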
Many well-known matrices $Z$ are associated with fast transforms corresponding to factorizations of the form $Z = X^J \ldots X^1$, where each factor $X^\ell$ is sparse and possibly structured. This paper investigates the essential uniqueness of such factorizations. Our first main contribution is to prove that any $N \times N$ matrix having the so-called butterfly structure admits an essentially unique factorization into $J$ butterfly factors (where $N = 2^J$), and that these factors can be recovered by a hierarchical factorization method. This contrasts with existing approaches, which fit a product of butterfly factors to a given matrix via gradient descent. The proposed method can in particular be applied to retrieve the factorization of the Hadamard or the Discrete Fourier Transform matrices of size $N = 2^J$. Computing such factorizations costs $\mathcal{O}(N^2)$, which is of the order of a dense matrix-vector multiplication, while the obtained factorizations enable fast $\mathcal{O}(N \log N)$ matrix-vector multiplications. This hierarchical identifiability property relies on a recently established simple identifiability condition in the two-layer, fixed-support setting. While the butterfly structure corresponds to a fixed prescribed support for each factor, our second contribution is to obtain identifiability results with more general families of allowed sparsity patterns, taking into account the unavoidable scaling ambiguities. Typically, we show through a hierarchical paradigm that the butterfly factorization of the Discrete Fourier Transform matrix of size $2^J$ admits a unique sparse factorization into $J$ factors when enforcing $2$-sparsity by column on each factor.
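For intuition about the structure being identified, the following sketch constructs the $J$ butterfly factors of the DFT matrix of size $N = 2^J$ from the radix-2 Cooley-Tukey recursion and checks that their product (times a bit-reversal permutation) equals the dense DFT matrix. This goes in the opposite direction of the paper's contribution, which recovers such factors from the matrix alone; the construction is standard FFT material, included only to show what the factors look like:

```python
# Build the J butterfly factors of the DFT of size N = 2^J and verify
# F_N = B_J ... B_1 P, with P the bit-reversal permutation.
import numpy as np

def butterfly_factors(J):
    N = 2 ** J
    factors = []
    for level in range(J):
        n = N >> level  # butterfly block size at this level
        w = np.exp(-2j * np.pi * np.arange(n // 2) / n)  # twiddle factors
        B = np.block([[np.eye(n // 2), np.diag(w)],
                      [np.eye(n // 2), -np.diag(w)]])
        factors.append(np.kron(np.eye(2 ** level), B))  # I ⊗ B: 2 nonzeros per row/column
    return factors

def bit_reversal_perm(J):
    N = 2 ** J
    idx = [int(format(i, f'0{J}b')[::-1], 2) for i in range(N)]
    return np.eye(N)[idx]

J = 3
N = 2 ** J
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
product = np.linalg.multi_dot(butterfly_factors(J) + [bit_reversal_perm(J)])
assert np.allclose(product, F)
```

Each factor is $2$-sparse by row and by column, matching the sparsity pattern under which the factorization is shown to be essentially unique.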
We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including: part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.
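A hedged sketch of the unified-architecture idea: one shared encoder feeds small per-task linear heads, so adding a task adds a head rather than new feature engineering. Layer choices, sizes, and label counts below are illustrative, not the paper's exact architecture:

```python
# Shared word embeddings and encoder with per-task tagging heads.
import torch

class UnifiedTagger(torch.nn.Module):
    def __init__(self, vocab_size, dim, task_label_counts):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, dim)  # shared representation
        self.encoder = torch.nn.Conv1d(dim, dim, kernel_size=5, padding=2)
        self.heads = torch.nn.ModuleDict({
            task: torch.nn.Linear(dim, n)
            for task, n in task_label_counts.items()})

    def forward(self, tokens, task):
        h = self.embed(tokens).transpose(1, 2)        # (batch, dim, seq)
        h = torch.relu(self.encoder(h)).transpose(1, 2)
        return self.heads[task](h)                    # per-token tag scores

# Illustrative tag-set sizes for the four tasks named above.
tagger = UnifiedTagger(50_000, 50, {"pos": 45, "chunk": 23, "ner": 9, "srl": 67})
```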